
    Building nonparametric n-body force fields using Gaussian process regression

    Constructing a classical potential suited to simulate a given atomic system is a remarkably difficult task. This chapter presents a framework under which this problem can be tackled, based on the Bayesian construction of nonparametric force fields of a given order using Gaussian process (GP) priors. The formalism of GP regression is first reviewed, particularly in relation to its application in learning local atomic energies and forces. For accurate regression it is fundamental to incorporate prior knowledge into the GP kernel function. To this end, this chapter details how properties of smoothness, invariance and interaction order of a force field can be encoded into corresponding kernel properties. A range of kernels is then proposed, possessing all the required properties and an adjustable parameter n governing the interaction order modelled. The order n best suited to describe a given system can be found automatically within the Bayesian framework by maximisation of the marginal likelihood. The procedure is first tested on a toy model of known interaction and later applied to two real materials described at the DFT level of accuracy. The models automatically selected for the two materials were found to be in agreement with physical intuition. More generally, it was found that lower-order (simpler) models should be chosen when the data are not sufficient to resolve more complex interactions. Low-n GPs can be further sped up by orders of magnitude by constructing the corresponding tabulated force field, here named "MFF".
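    As a rough illustration of the model-selection step described above, the sketch below compares GP log marginal likelihoods across candidate kernels. It is a minimal toy version: generic squared-exponential kernels with different lengthscales stand in for the paper's n-body kernels of different order n, and the data are synthetic.

```python
import numpy as np

def log_marginal_likelihood(K, y, noise=1e-6):
    """GP log marginal likelihood log p(y | X) for kernel matrix K."""
    n = len(y)
    L = np.linalg.cholesky(K + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^-1 y via Cholesky
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2.0 * np.pi))

def rbf_kernel(X, lengthscale):
    """Squared-exponential kernel, a stand-in for the paper's n-body kernels."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Toy data: select the candidate kernel (here indexed by lengthscale, playing
# the role of the interaction order n) that maximises the marginal likelihood.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))
y = np.sin(X.sum(axis=1)) + 0.01 * rng.normal(size=50)

scores = {ls: log_marginal_likelihood(rbf_kernel(X, ls), y, noise=1e-4)
          for ls in (0.5, 1.0, 2.0)}
print("selected:", max(scores, key=scores.get))
```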

    Machine-learning of atomic-scale properties based on physical principles

    We briefly summarize the kernel regression approach, as used recently in materials modelling, to fitting functions, particularly potential energy surfaces, and highlight how the linear algebra framework can be used both to predict and to train from linear functionals of the potential energy, such as the total energy and atomic forces. We then give a detailed account of the Smooth Overlap of Atomic Positions (SOAP) representation and kernel, showing how it arises from an abstract representation of smooth atomic densities and how it is related to several popular density-based representations of atomic structure. We also discuss recent generalisations that allow fine control of correlations between different atomic species, prediction and fitting of tensorial properties, and the construction of structural kernels, applicable to comparing entire molecules or periodic systems, that go beyond an additive combination of local environments.
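    The baseline the abstract contrasts against, an additive (average) combination of local-environment kernels, can be sketched as follows. The random feature vectors are hypothetical stand-ins for SOAP power spectra, and the dot-product kernel is the usual normalised form raised to a power.

```python
import numpy as np

def dot_kernel(a, b, zeta=2):
    """SOAP-style local kernel: normalised dot product raised to a power zeta."""
    return (a @ b / (np.linalg.norm(a) * np.linalg.norm(b))) ** zeta

def structure_kernel(envs_A, envs_B, local_kernel=dot_kernel):
    """Average (additive) structural kernel: mean of all pairwise local
    kernels between the environments of two structures."""
    K_local = np.array([[local_kernel(a, b) for b in envs_B] for a in envs_A])
    return K_local.mean()

# Toy "environments": random vectors standing in for SOAP power spectra.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))   # structure A: 4 atoms, 8 features each
B = rng.normal(size=(6, 8))   # structure B: 6 atoms
print(structure_kernel(A, B))
```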

    Big-Data-Driven Materials Science and its FAIR Data Infrastructure

    This chapter addresses the fourth paradigm of materials research: big-data-driven materials science. Its concepts and state of the art are described, and its challenges and opportunities are discussed. For furthering the field, Open Data and all-embracing data sharing, an efficient data infrastructure, and the rich ecosystem of computer codes used in the community are of critical importance. For shaping this fourth paradigm and contributing to the development or discovery of improved and novel materials, data must be what is now called FAIR: Findable, Accessible, Interoperable and Re-purposable/Re-usable. This sets the stage for advances of methods from artificial intelligence that operate on large data sets to find trends and patterns that cannot be obtained from individual calculations, nor even directly from high-throughput studies. Recent progress is reviewed and demonstrated, and the chapter is concluded by a forward-looking perspective addressing important, as yet unsolved challenges.

    Database-driven High-Throughput Calculations and Machine Learning Models for Materials Design

    This paper reviews past and ongoing efforts in using high-throughput ab initio calculations in combination with machine learning models for materials design. The primary focus is on bulk materials, i.e., materials with fixed, ordered crystal structures, although the methods naturally extend to more complicated configurations. Efficient and robust computational methods, computational power, and reliable methods for automated database-driven high-throughput computation are combined to produce high-quality data sets. These data sets can be used to train machine learning models for predicting the stability of bulk materials and their properties. The underlying computational methods and the tools for automated calculations are discussed in some detail. Various machine learning models and, in particular, descriptors for general use in materials design are also covered.
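    A minimal sketch of the descriptor-plus-model workflow such reviews describe, using scikit-learn's kernel ridge regression as a generic stand-in; the descriptors and target below are synthetic placeholders, not any actual high-throughput data set.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Hypothetical data: one descriptor vector per material (e.g. averaged
# elemental properties) and a target such as formation energy per atom.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))                             # stand-in descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)   # stand-in target

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {-scores.mean():.3f}")
```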

    Characterization of inverted coaxial ^{76}Ge detectors in GERDA for future double-β decay experiments

    Neutrinoless double-β decay of ^{76}Ge is searched for with germanium detectors in which source and detector of the decay are identical. For the success of future experiments it is important to increase the mass of the detectors. We report here on the characterization and testing of five prototype detectors manufactured in inverted coaxial (IC) geometry from material enriched to 88% in ^{76}Ge. IC detectors combine the large mass of the traditional semi-coaxial Ge detectors with the superior resolution and pulse shape discrimination power of point-contact detectors, which have so far been limited to much lower masses. Their performance has been found to be satisfactory both when operated in a vacuum cryostat and when operated bare in liquid argon within the GERDA setup. The measured resolutions at the Q-value for double-β decay of ^{76}Ge (Qββ = 2039 keV) are about 2.1 keV full width at half maximum in a vacuum cryostat. After 18 months of operation within the ultra-low-background environment of the GERmanium Detector Array (GERDA) experiment and an accumulated exposure of 8.5 kg⋅year, the background index after analysis cuts is measured to be 4.9^{+7.3}_{-3.4}×10^{-4} counts/(keV⋅kg⋅year) around Qββ. This work confirms the feasibility of IC detectors for the next-generation experiment LEGEND.
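    For scale, a back-of-envelope check of the quoted background index, assuming a hypothetical 10 keV analysis window around Qββ (the window width is an assumption, not from the paper):

```python
# Expected background counts = index * exposure * energy window.
background_index = 4.9e-4   # counts / (keV * kg * yr), central value from the text
exposure = 8.5              # kg * yr, from the text
window = 10.0               # keV, assumed for illustration
print(f"expected counts: {background_index * exposure * window:.3f}")  # ~0.04
```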

    Search for tri-nucleon decays of ^{76}Ge in GERDA

    We search for tri-nucleon decays of ^{76}Ge in the dataset from the GERmanium Detector Array (GERDA) experiment. Decays that populate excited levels of the daughter nucleus above the threshold for particle emission lead to disintegration and are not considered. The ppp-, ppn-, and pnn-decays lead to ^{73}Cu, ^{73}Zn, and ^{73}Ga nuclei, respectively. These nuclei are unstable and eventually proceed via the beta decay of ^{73}Ga to ^{73}Ge (stable). We search for the ^{73}Ga decay exploiting the fact that it dominantly populates the 66.7 keV ^{73m}Ge state, which has a half-life of 0.5 s. The nnn-decays of ^{76}Ge that proceed via ^{73m}Ge are also included in our analysis. We find no signal candidate and place a limit on the sum of the decay widths of the inclusive tri-nucleon decays that corresponds to a lower lifetime limit of 1.2×10^{26} yr (90% credible interval). This result improves previous limits for tri-nucleon decays by one to three orders of magnitude.
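    The quoted lifetime limit can be converted into the corresponding upper limit on the summed decay width via Γ = ħ/τ; the sketch below is unit bookkeeping only, not part of the paper's analysis.

```python
# Gamma = hbar / tau, with the lifetime limit from the text.
HBAR_EV_S = 6.582e-16                    # eV * s
SECONDS_PER_YEAR = 3.156e7
tau = 1.2e26 * SECONDS_PER_YEAR          # lower lifetime limit, in seconds
gamma_limit = HBAR_EV_S / tau            # upper limit on the summed width, eV
print(f"Gamma < {gamma_limit:.2e} eV")   # ~1.7e-49 eV
```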

    Pulse shape analysis in GERDA Phase II

    The GERmanium Detector Array (GERDA) collaboration searched for neutrinoless double-β decay in ^{76}Ge using isotopically enriched high-purity germanium detectors at the Laboratori Nazionali del Gran Sasso of INFN. After Phase I (2011–2013), the experiment benefited from several upgrades, including an additional active veto based on LAr instrumentation and a significant increase of mass by point-contact germanium detectors, which improved the half-life sensitivity of Phase II (2015–2019) by an order of magnitude. At the core of the background mitigation strategy, the analysis of the time profile of individual pulses provides a powerful topological discrimination of signal-like and background-like events. Data from regular ^{228}Th calibrations and physics data were both considered in the evaluation of the pulse shape discrimination performance. In this work, we describe the various methods applied to the data collected in GERDA Phase II, corresponding to an exposure of 103.7 kg⋅year. These methods suppress the background by a factor of about 5 in the region of interest around Q_{ββ} = 2039 keV, while preserving (81 ± 3)% of the signal. In addition, an exhaustive list of the parameters used in the final data analysis is provided.
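    As an illustration of pulse-shape discrimination for point-contact detectors, a toy A/E-style cut is sketched below. The A/E parameter is a common choice in the field and is assumed here purely for illustration; the paper itself describes several methods.

```python
import numpy as np

def a_over_e(pulse, energy):
    """A/E parameter (assumed here, common for point-contact detectors):
    maximum current amplitude A over reconstructed energy E.
    Single-site (signal-like) events cluster at high A/E."""
    current = np.diff(pulse)          # charge pulse -> current pulse
    return current.max() / energy

def psd_cut(pulses, energies, threshold):
    """Keep events whose A/E exceeds a calibration-derived threshold."""
    return [a_over_e(p, e) > threshold for p, e in zip(pulses, energies)]

# Toy single-site charge pulse: integrated exponential current.
pulse = np.cumsum(np.exp(-np.arange(200) / 30.0))
print(a_over_e(pulse, energy=pulse[-1]))
```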

    Final Results of GERDA on the Search for Neutrinoless Double-β Decay

    The GERmanium Detector Array (GERDA) experiment searched for the lepton-number-violating neutrinoless double-β (0νββ) decay of ^{76}Ge, whose discovery would have far-reaching implications in cosmology and particle physics. By operating bare germanium diodes, enriched in ^{76}Ge, in an active liquid argon shield, GERDA achieved an unprecedentedly low background index of 5.2×10^{-4} counts/(keV⋅kg⋅yr) in the signal region and met the design goal to collect an exposure of 100 kg⋅yr in a background-free regime. When combined with the result of Phase I, no signal is observed after 127.2 kg⋅yr of total exposure. A limit on the half-life of 0νββ decay in ^{76}Ge is set at T_{1/2} > 1.8×10^{26} yr at 90% C.L., which coincides with the sensitivity assuming no signal.
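    An order-of-magnitude check of the half-life limit from the quoted exposure, via the standard counting relation T_{1/2} > ln2 · N · ε · E / N_up; the signal efficiency and the 90% C.L. upper count limit used below are assumed values, not taken from the paper.

```python
import numpy as np

N_A = 6.022e23              # 1/mol
MOLAR_MASS_KG = 0.0756      # kg/mol, germanium
ENRICHMENT = 0.87           # assumed 76Ge fraction of the enriched material
EXPOSURE = 127.2            # kg * yr, from the text
EFFICIENCY = 0.6            # assumed total signal efficiency
N_UP = 2.44                 # assumed 90% C.L. upper count limit (no signal, no bkg)

atoms_per_kg = ENRICHMENT * N_A / MOLAR_MASS_KG
t_half = np.log(2) * atoms_per_kg * EXPOSURE * EFFICIENCY / N_UP
print(f"T_1/2 scale: {t_half:.1e} yr")   # ~1.5e26 yr, same scale as 1.8e26 yr
```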

    Liquid argon light collection and veto modeling in GERDA Phase II

    The ability to detect liquid argon scintillation light from within a densely packed high-purity germanium detector array allowed the GERDA experiment to reach an exceptionally low background rate in the search for neutrinoless double-β decay of ^{76}Ge. Proper modeling of the light propagation throughout the experimental setup, from any origin in the liquid argon volume to its eventual detection by the novel light read-out system, provides insight into the rejection capability and is a necessary ingredient to obtain robust background predictions. In this paper, we present a model of the GERDA liquid argon veto, as obtained by Monte Carlo simulations and constrained by calibration data, and highlight its application for background decomposition.
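    A schematic toy version of such a light-propagation Monte Carlo, reduced to a single spherical photosensor and one attenuation length (no wavelength shifting, reflections, or detailed geometry; all values below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

SENSOR_POS = np.array([0.0, 0.0, 1.0])   # m, assumed sensor position
SENSOR_RADIUS = 0.05                     # m, assumed sensor size
ATT_LENGTH = 0.6                         # m, assumed LAr attenuation length

def detection_probability(origin, n_photons=100_000):
    """Fraction of isotropically emitted photons that reach the sensor,
    attenuated by absorption along the straight-line path."""
    v = rng.normal(size=(n_photons, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)       # isotropic directions
    to_sensor = SENSOR_POS - origin
    d = np.linalg.norm(to_sensor)
    # A photon counts as a hit if its direction falls inside the sensor's
    # angular acceptance; it then survives absorption with prob exp(-d/lambda).
    cos_accept = d / np.sqrt(d**2 + SENSOR_RADIUS**2)
    hits = (v @ (to_sensor / d)) > cos_accept
    return hits.mean() * np.exp(-d / ATT_LENGTH)

print(detection_probability(np.array([0.0, 0.0, 0.0])))
```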